
Issue 274: Add continuous benchmark structure #298

Merged: seabbs merged 26 commits from continuous-benchmarks into main on Jun 21, 2024

Conversation

seabbs (Collaborator) commented on Jun 19, 2024

This PR closes #274.

It adds infrastructure for benchmarking using BenchmarkTools, BenchmarkCI, and PkgBenchmark. I have also added a proposed structure for future benchmarks, but this is not set in stone. When I originally looked at this I wanted to integrate it with TestItemRunner (and so reuse our test suite for benchmarks), but I couldn't work out how to do this, and it is arguably better to have more targeted benchmarks anyway.
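
For context, PkgBenchmark discovers benchmarks via a SUITE object defined in benchmark/benchmarks.jl. A minimal sketch of the kind of entry point this structure implies (the group name and dummy workload are illustrative, not what this PR ships) would be:

using BenchmarkTools

# PkgBenchmark runs whatever is registered on this top-level suite.
const SUITE = BenchmarkGroup()

# Per-submodule groups can be registered here as they are added in later issues.
SUITE["example"] = BenchmarkGroup()
SUITE["example"]["dummy"] = @benchmarkable sum(randn(1_000))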

The CI action for this should (see the sketch of the driver script after this list):

  • Run on all PRs
  • Cancel currently running benchmarks
  • Run benchmarks on the PR code and on main
  • Post a comment comparing these two sets of benchmarks.
  • Run only on Ubuntu with the latest Julia version. BenchmarkCI doesn't appear to support a build matrix, hence this decision (plus it saves compute). This means we won't catch platform-specific performance regressions.
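
Under the hood, the workflow's benchmark step is essentially a small Julia driver along these lines (a sketch based on the BenchmarkCI docs, not the exact script in this PR; the baseline argument is an assumption):

using BenchmarkCI

# Benchmark the PR head against the baseline branch and post the comparison
# back to the PR as a comment (needs a GitHub token in the Actions environment).
BenchmarkCI.judge(baseline = "origin/main")
BenchmarkCI.postjudge()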

This could potentially get a bit noisy/slow, so we may need to watch how it works out and go from there.

This can be tested locally in the benchmark environment using:

using PkgBenchmark
benchmarkpkg(Rtwithoutrenewal)  # runs the package's benchmark suite and reports the results

You should also be able to use BenchmarkCI locally to compare commits, but I haven't tested this.
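
If someone wants to try that, my understanding (untested) is that the local workflow from the BenchmarkCI docs looks roughly like:

using BenchmarkCI

# Compare the current checkout against main and print the judgement locally
# instead of posting a PR comment.
BenchmarkCI.judge(baseline = "origin/main")
BenchmarkCI.displayjudgement()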

Note

This PR does not add an actual benchmark suite. I don't really have a great handle on BenchmarkTools yet, and I think this should probably be done in a few issues once we have an idea of the sorts of things we want to benchmark (to discuss here). Initially I would propose doing an issue per submodule, benchmarking each generative model plus a few combinations, and then also having a few integrated models that we benchmark in more detail (in another module/issue?).
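
To make the per-submodule idea concrete, a file like the following could be included from benchmark/benchmarks.jl (the file path, submodule name, and workloads are all hypothetical placeholders, not real benchmarks):

# benchmark/bench/ExampleSubmodule.jl (hypothetical)
using BenchmarkTools

example_suite = BenchmarkGroup()

# Stand-in workloads; real benchmarks would construct and run the submodule's
# generative models and a few combinations of them.
example_suite["generative_model"] = @benchmarkable cumsum(randn(1_000))
example_suite["combined_model"] = @benchmarkable sum(cumsum(randn(1_000)))

SUITE["ExampleSubmodule"] = example_suite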

seabbs requested a review from SamuelBrand1 on Jun 19, 2024, 11:52
SamuelBrand1 (Collaborator)

This looks like a really good move.

seabbs (Collaborator, Author) commented on Jun 21, 2024

This now works, I think, but it needs testing from main. As it doesn't touch any other code, I'm going to merge based on the previous partial review.

seabbs merged commit 8166227 into main on Jun 21, 2024 (10 of 11 checks passed)
seabbs deleted the continuous-benchmarks branch on Jun 21, 2024, 11:53
Successfully merging this pull request may close: Continuously benchmark performance (#274)